Minimum Excess Risk in Bayesian Learning

Authors

Aolin Xu, Maxim Raginsky

Abstract

We analyze the best achievable performance of Bayesian learning under generative models by defining and upper-bounding the minimum excess risk (MER): the gap between the minimum expected loss attainable by learning from data and the minimum expected loss that could be achieved if the model realization were known. The definition of the MER provides a principled way to define different notions of uncertainty in learning, including the aleatoric uncertainty and the minimum epistemic uncertainty. Two methods for deriving upper bounds on the MER are presented. The first method, generally suitable for Bayesian learning with a parametric generative model, upper-bounds the MER by the conditional mutual information between the model parameters and the quantity being predicted given the observed data. It allows us to quantify the rate at which the MER decays to zero as more data becomes available. Under realizable models, this method also relates the MER to the richness of the generative function class, notably the VC dimension in binary classification. The second method, particularly suitable for Bayesian learning with a parametric predictive model, relates the MER to the minimum estimation error of the model parameters from data, via various continuity arguments. We extend the analysis to the settings with multiple model families and with nonparametric models. Along the way, we draw some comparisons with the excess risk in frequentist learning.
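In symbols, the MER compares the best loss achievable by learning from data with the best loss achievable when the model realization is known. The sketch below is our own formalization consistent with the abstract, using assumed notation (model realization W, observed data D, fresh feature-label pair (X, Y), loss ℓ); it is an illustration, not the paper's exact statement.

```latex
% Illustrative formalization of the MER (notation assumed, not quoted from the paper).
% W: model realization, D: observed data, (X, Y): quantity to be predicted, \ell: loss.
\[
\mathrm{MER}
  \;=\; \underbrace{\inf_{\psi}\,\mathbb{E}\!\left[\ell\big(Y,\psi(X,D)\big)\right]}_{\text{best loss learnable from data}}
  \;-\; \underbrace{\inf_{\phi}\,\mathbb{E}\!\left[\ell\big(Y,\phi(X,W)\big)\right]}_{\text{best loss with } W \text{ known}} .
\]
% The first bounding method controls the MER via the conditional mutual information
% I(W; Y | X, D); for instance, under logarithmic loss the MER coincides with it,
% while for well-behaved bounded losses one gets square-root-type bounds of the form
\[
\mathrm{MER} \;\le\; \sqrt{\,c \cdot I(W;\,Y \mid X, D)\,}
\quad\text{for a loss-dependent constant } c .
\]
```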


Related articles

Excess Risk Bounds for Multi-Task Learning

The idea that it should be easier to learn several tasks if they are related in some way is quite intuitive and has been found to work in many practical settings. There has been some interest in obtaining theoretical results to better understand this phenomenon (e.g. [3, 4]). Maurer [4] considers the case when the “relatedness” of the tasks is captured by requiring that all tasks share a common...


Rademacher Complexities and Bounding the Excess Risk in Active Learning

Sequential algorithms of active learning based on the estimation of the level sets of the empirical risk are discussed in the paper. Localized Rademacher complexities are used in the algorithms to estimate the sample sizes needed to achieve the required accuracy of learning in an adaptive way. Probabilistic bounds on the number of active examples have been proved and several applications to bin...

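As a companion to the abstract above, here is a hedged Python sketch (our illustration, not the paper's algorithm) of its central quantity: a Monte Carlo estimate of the empirical Rademacher complexity of a finite hypothesis class, the data-dependent measure such methods use to calibrate how many labeled examples are needed.

```python
# Monte Carlo estimate of the empirical Rademacher complexity of a finite class.
# Hypothetical toy setting; not taken from the paper under discussion.
import numpy as np

def empirical_rademacher(predictions: np.ndarray, n_draws: int = 1000,
                         rng: np.random.Generator | None = None) -> float:
    """Estimate E_sigma[ max_h (1/n) * sum_i sigma_i * h(x_i) ].

    predictions: (n_hypotheses, n_samples) array of hypothesis outputs in {-1, +1}.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    _, n = predictions.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)   # i.i.d. Rademacher signs
        total += np.max(predictions @ sigma) / n  # sup over the (finite) class
    return total / n_draws

# Toy usage: 50 threshold classifiers on 200 points. The estimate shrinks roughly
# like 1/sqrt(n), which is what drives adaptive sample-size choices in such methods.
rng = np.random.default_rng(1)
x = rng.uniform(size=200)
thresholds = rng.uniform(size=(50, 1))
preds = np.where(x[None, :] > thresholds, 1.0, -1.0)
print(f"estimated empirical Rademacher complexity: {empirical_rademacher(preds):.3f}")
```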

Learning Bayesian network classifiers by risk minimization

Article history: Received 22 June 2011; Received in revised form 1 October 2011; Accepted 24 October 2011; Available online 29 October 2011.


Excess risk bounds for multitask learning with trace norm regularization

Trace norm regularization is a popular method of multitask learning. We give excess risk bounds with explicit dependence on the number of tasks, the number of examples per task and properties of the data distribution. The bounds are independent of the dimension of the input space, which may be infinite as in the case of reproducing kernel Hilbert spaces. A byproduct of the proof are bounds on t...

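To illustrate the method this abstract analyzes, below is a minimal Python sketch (our own, with hypothetical toy data) of trace norm regularized multitask learning via proximal gradient descent, where the proximal operator of the trace (nuclear) norm is singular value soft-thresholding.

```python
# Trace norm regularized multitask least squares via proximal gradient descent.
# A sketch under assumed toy data; not the paper's experimental setup.
import numpy as np

def svt(W: np.ndarray, tau: float) -> np.ndarray:
    """Singular value soft-thresholding: the prox of tau * ||W||_* (trace norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_grad_step(W, Xs, ys, lam, lr):
    """One proximal gradient step on
    sum_t ||X_t w_t - y_t||^2 / (2 n_t) + lam * ||W||_*,
    where column t of W holds the weight vector of task t."""
    grad = np.column_stack([X.T @ (X @ w - y) / len(y)
                            for X, y, w in zip(Xs, ys, W.T)])
    return svt(W - lr * grad, lr * lam)

# Hypothetical toy data: 3 tasks whose true weights span a common 1-D subspace,
# exactly the low-rank structure the trace norm penalty promotes.
rng = np.random.default_rng(0)
d, n = 10, 40
w_shared = rng.normal(size=d)
Xs = [rng.normal(size=(n, d)) for _ in range(3)]
ys = [X @ (c * w_shared) + 0.1 * rng.normal(size=n)
      for X, c in zip(Xs, (1.0, 0.5, -0.8))]
W = np.zeros((d, 3))
for _ in range(200):
    W = prox_grad_step(W, Xs, ys, lam=0.1, lr=0.1)
print("singular values of learned W:", np.round(np.linalg.svd(W, compute_uv=False), 3))
```

The shrinkage step zeroes out small singular values, so the learned weight matrix ends up approximately rank one, reflecting the shared structure across tasks.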

Learning Bayesian Network Structure using Markov Blanket in K2 Algorithm

A Bayesian network is a graphical model that represents a set of random variables and their causal relationships via a Directed Acyclic Graph (DAG). There are basically two methods used for learning a Bayesian network: parameter learning and structure learning. One of the most effective structure-learning methods is the K2 algorithm. Because the performance of the K2 algorithm depends on node...

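For concreteness, a hedged Python sketch of the textbook K2 procedure follows (our illustration, not the article's implementation): given a fixed node ordering, each node greedily adds the preceding node that most improves the Cooper-Herskovits (K2) score, which is exactly why performance hinges on the ordering.

```python
# Greedy K2 structure search for binary variables (textbook form, illustrative only).
from itertools import product
from math import lgamma

import numpy as np

def k2_score(data: np.ndarray, child: int, parents: list[int], r: int = 2) -> float:
    """Log K2 (Cooper-Herskovits) score of `child` given `parents`, r states per variable."""
    score = 0.0
    for ps in product(range(r), repeat=len(parents)):
        rows = np.all(data[:, parents] == ps, axis=1) if parents else np.ones(len(data), bool)
        counts = np.array([np.sum(data[rows, child] == k) for k in range(r)])
        # log[ (r-1)! / (N_ij + r - 1)! * prod_k N_ijk! ] via log-gamma
        score += lgamma(r) - lgamma(counts.sum() + r) + sum(lgamma(c + 1) for c in counts)
    return score

def k2(data: np.ndarray, order: list[int], max_parents: int = 2, r: int = 2) -> dict:
    """Greedy K2 search; the result depends heavily on the given node ordering."""
    parents: dict[int, list[int]] = {}
    for i, node in enumerate(order):
        chosen: list[int] = []
        best = k2_score(data, node, chosen, r)
        while len(chosen) < max_parents:
            candidates = [p for p in order[:i] if p not in chosen]
            scores = [(k2_score(data, node, chosen + [p], r), p) for p in candidates]
            if not scores or max(scores)[0] <= best:
                break  # no remaining candidate parent improves the score
            best, p = max(scores)
            chosen.append(p)
        parents[node] = chosen
    return parents

# Toy usage: X0 -> X1 -> X2 chain; with the correct ordering, K2 recovers the chain.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 2000)
x1 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)  # noisy copy of x0
x2 = (x1 ^ (rng.random(2000) < 0.1)).astype(int)  # noisy copy of x1
print(k2(np.column_stack([x0, x1, x2]), order=[0, 1, 2]))
```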


Journal

Journal title: IEEE Transactions on Information Theory

Year: 2022

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2022.3176056